34 research outputs found

    Care to explain?: A critical epistemic in/justice based analysis of legal explanation obligations and ideals for ‘AI’-infused times

    Get PDF
    Fundamental legal explanation rights are seen to be in peril because of the use of inscrutable computational methods in decision making across important domains such as health care, welfare, and the judiciary. New technology-oriented explanation rules are created in response. As part of such rules, human explainers are tasked with re-humanizing automated decisional processes. By providing their explainees with meaningful information, explainers are expected to help protect these decision subjects from AI-related harms such as wrongful discrimination, and to sustain their ability to participate responsibly in decision making about them. De Groot questions the merits of, and the ideas behind, these legislative approaches. Harms typically ascribed to the use of algorithms and modern ‘AI’ are not so different in character from harms that existed long before the ‘digital revolution.’ If explanation rights have a role to play as a tool against what De Groot describes as knowledge-related wrongdoing, law has something to answer for, since its explanation rules have thus far underserved those in less privileged societal positions, both before and after decisions were automated. To conduct this critical questioning, the thesis approaches explanation as a form of knowledge making. It builds a ‘re-idealized’ model of explanation duties based on values described in the philosophical fields of epistemic justice and injustice. Starting from critical insights regarding responsibly informed interaction in situations of social-informational inequality, the model relates duties of explanation care to different phases of an explanation cycle. The model is then applied in an analysis of the main explanation rules for administrative and medical decision making in the Netherlands. In ‘technology and regulation’ discussions, both domains are appealed to as benchmarks for the dignified treatment of explainees. The analysis, however, teases out how these paradigms ignore important dimensions of decision making, and how explainers are not instructed to engage with explainees in ways that fundamentally respect them as knowers and rights holders. By generating conceptual criticism and making practical, detailed points, the thesis demonstrates work that can be done to improve explanation regulation moving forward.

    Explanation is a concept in an AI-induced crisis, they say. But badly explained pandemic politics illustrate how its core values have never been safe. Can we use the momentum?

    No full text
    Duties to explain decisions to individuals exist in laws and other types of regulation. Rules are stricter where dependencies of explainees are larger, where unequal powers add weight to the information imbalance. Of late, decision makers’ decreasing knowledgeability of decision support technology is said to hit critical levels. Machine conclusions seem unreasonable, and dignitarian concerns are raised: humane treatment is said to depend on the ability to explain, especially in sensitive contexts. If this is true, why haven’t our most fundamental laws prevented this corrosion? Covert ‘algorithmic harms’ to groups and individuals were exposed in environments where explanation was regulated, and in sensitive contexts. Fundamental unsafety still slipped in, with highly disparate impact. Insights from the research fields of epistemic (in)justice help to understand how this happens. When social dynamics of knowledge practices go unchecked, epistemic authority easily becomes a factor of other powers, and patterns of marginalization appear. ‘Other’ people’s knowledge, capacities, and participation are wrongly excluded, dismissed, and misused. Wrongful knowledge is made, and harms play out on individual and collective levels. Core values of explanation promote the ability to recognize when, what, and whom to trust and distrust with regard to what is professed. In democratic societies, this capability is highly depended upon. It is true that current challenges to these values are not sufficiently met by regulation, but this problem does not follow from technological developments – it precedes them. When the Corona crisis hit, national authorities based decisions with fundamental impact on people’s lives on real-time knowledge making. Many professed to build on expert advice, science, and technology, but still asked to be trusted for their political authority. Critical choices with regard to expertise and experts remained unexplained, concepts unreasoned. Whose jobs are crucial, who is vulnerable, and what does prioritizing health and safety mean? Patterns of marginalization appeared, and policy measures have shown disparate impact. In times of crisis, the tendency to lean on authority rather than honest explanation and diverse knowledge co-creation is a recurring pattern. This contribution argues for using this dual momentum to assess and reinforce our explanation regulation. If we truly want it to express the fundamental importance of explanation, insights from the fields of epistemic (in)justice should lead the way. To support such efforts, this contribution presents a working model of explanation as a type of interactive, testimonial practice.

    Datatech is hot, maar mensenrechten zijn cool – dat laatste horen we te weinig [Data tech is hot, but human rights are cool – we hear too little of the latter]

    No full text
    Policy on data technology is urgently needed in the coming government term. Enthusiastic plans for deploying big data and algorithms must go hand in hand with an equally enthusiastic story about human rights safeguards.

    AI-infused decisions: “and a spoonful of dignity”

    No full text
    AI has the potential to make decisions and optimise processes – for example in medical treatments. But the new kind of AI-infused decision making works in obscure ways, and we need transcriptions. In her blog post, Aviva de Groot describes how to appreciate the aspect of dignity – an elusive ingredient of a ‘right to explanation’ – when thinking about automated decision-making.

    Introduction

    No full text